
    A time frequency analysis of wave packet fractional revivals

    We show that a time-frequency analysis of the autocorrelation function is, in many ways, a more appropriate tool for resolving fractional revivals of a wave packet than the usual time-domain analysis. This advantage is crucial in reconstructing the initial state of the wave packet when its coherent structure is short-lived and decays before it is fully revived. Our calculations are based on the model example of fractional revivals in a Rydberg wave packet of circular states. We end with an analytical investigation that fully agrees with our numerical observations on the utility of time-frequency analysis in the study of wave packet fractional revivals.
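
    As a rough illustration of the idea (not the authors' code), the sketch below builds a model autocorrelation signal for a hydrogenic wave packet with a Gaussian population of circular states and takes its short-time Fourier transform; all parameters (nbar, sigma, the time grid, the STFT window) are invented for the example.

```python
import numpy as np
from scipy.signal import stft

# Hypothetical model: Rydberg wave packet with a Gaussian population
# around nbar (atomic units throughout).
nbar, sigma = 60, 2.5
n = np.arange(nbar - 15, nbar + 16)
c2 = np.exp(-((n - nbar) ** 2) / (2 * sigma ** 2))
c2 /= c2.sum()
E = -0.5 / n.astype(float) ** 2          # hydrogenic energy levels

T_cl = 2 * np.pi * nbar ** 3             # classical Kepler period
t = np.linspace(0, 40 * T_cl, 2 ** 15)

# Autocorrelation A(t) = sum_n |c_n|^2 exp(-i E_n t)
A = (c2[:, None] * np.exp(-1j * np.outer(E, t))).sum(axis=0)

# Time-domain view: |A(t)| shows full and fractional revivals.
# Time-frequency view: the STFT of A(t) separates fractional revivals
# that overlap in time, since each carries its own local periodicity.
fs = 1.0 / (t[1] - t[0])
f, tau, Z = stft(A, fs=fs, nperseg=1024, return_onesided=False)
spectrogram = np.abs(Z)                  # inspect e.g. with matplotlib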

    The power of teams that disagree: team formation in large action spaces

    Recent work has shown that diverse teams can outperform a uniform team made of copies of the best agent. However, fundamental questions remain unasked: when should we use diverse rather than uniform teams, and how does performance change as the action space or the teams grow larger? Hence, we present a new model of diversity in which we prove that the performance of a diverse team improves as the size of the action space increases. Moreover, we show that the performance converges exponentially fast to the optimal one as we increase the number of agents. We present synthetic experiments that give further insight: even though a diverse team outperforms a uniform team as the size of the action space increases, the uniform team will eventually again play better than the diverse team for a large enough action space. We verify our predictions in a system of Go-playing agents, where a diverse team improves in performance as the board size increases and eventually overcomes a uniform team.
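
    A minimal sketch of the voting setup the abstract describes, under assumed agent models: each agent is a probability distribution over actions, a uniform team is k copies of the strongest agent, a diverse team is k different (individually weaker) agents, and the team plays the plurality vote. All distributions and parameters here are invented for illustration.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def agent_pdf(m, strength, rng):
    # Hypothetical agent model: extra probability mass on the best
    # action (index 0), the rest spread at random over m actions.
    p = rng.dirichlet(np.ones(m))
    p[0] += strength
    return p / p.sum()

def team_accuracy(pdfs, trials, rng):
    # Plurality voting: each agent samples an action from its own pdf;
    # the team plays the most-voted action, ties broken at random.
    m = len(pdfs[0])
    wins = 0
    for _ in range(trials):
        votes = [rng.choice(m, p=p) for p in pdfs]
        counts = Counter(votes).most_common()
        tied = [a for a, c in counts if c == counts[0][1]]
        wins += rng.choice(tied) == 0        # did the team pick action 0?
    return wins / trials

m, k = 50, 7                      # action-space size, team size
best = agent_pdf(m, 0.4, rng)     # strongest single agent
uniform = [best] * k              # k copies of the best agent
diverse = [agent_pdf(m, 0.3, rng) for _ in range(k)]

print("uniform team:", team_accuracy(uniform, 2000, rng))
print("diverse team:", team_accuracy(diverse, 2000, rng))
```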

    A Parallel Incremental Learning Algorithm for Neural Networks with Fault Tolerance

    URL: http://vecpar.fe.up.pt/2008/papers/46.pdf
    This paper presents a parallel and fault-tolerant version of an incremental learning algorithm for feed-forward neural networks used as function approximators. Previous work has shown that our incremental algorithm builds networks of reduced size while providing high-quality approximations for real data sets. However, for very large sets, running our learning process on a single machine may take very long or even prove impossible due to memory limitations. The parallel algorithm presented in this paper is usable in any parallel system and, in particular, with large dynamical systems such as clusters and grids in which faults may occur. Finally, the quality and performance of the algorithm, with and without faults, are evaluated experimentally.
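
    The paper's exact algorithm (and its parallel, fault-tolerant machinery) is not reproduced here; the sketch below only illustrates the general incremental idea of growing a small network one hidden unit at a time, each unit fitted to the current residual, with an early stop that keeps the network small. Every name and parameter in it is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D regression set; the paper targets much larger real data sets.
X = np.linspace(-3, 3, 200)[:, None]
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(200)

def fit_unit(X, r, steps=400, lr=0.05):
    # Fit one tanh unit v * tanh(w.x + b) to the residual r by plain
    # gradient descent on the squared error.
    w, b, v = rng.standard_normal(X.shape[1]), 0.0, 0.1
    for _ in range(steps):
        h = np.tanh(X @ w + b)
        g = v * h - r                    # pointwise error
        dh = v * (1 - h ** 2)
        w -= lr * (X * (g * dh)[:, None]).mean(axis=0)
        b -= lr * (g * dh).mean()
        v -= lr * (g * h).mean()
    return w, b, v

pred = np.zeros_like(y)
for k in range(12):
    r = y - pred                         # residual left to explain
    w, b, v = fit_unit(X, r)
    pred = pred + v * np.tanh(X @ w + b)
    mse = ((y - pred) ** 2).mean()
    if mse < 0.02:                       # stop early: small network
        break
print(f"{k + 1} hidden units, MSE = {mse:.4f}")
```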

    Learning latent representations across multiple data domains using Lifelong VAEGAN

    The problem of catastrophic forgetting occurs in deep learning models trained on multiple databases in a sequential manner. Recently, generative replay mechanisms (GRM) have been proposed to reproduce previously learned knowledge in order to reduce forgetting. However, such approaches lack an appropriate inference model and therefore cannot provide latent representations of the data. In this paper, we propose a novel lifelong learning approach, the Lifelong VAEGAN (L-VAEGAN), which not only induces a powerful generative replay network but also learns meaningful latent representations, benefiting representation learning. L-VAEGAN automatically embeds the information associated with different domains into several clusters in the latent space, while also capturing semantically meaningful latent variables shared across different data domains. The proposed model supports many downstream tasks that traditional generative replay methods cannot, including interpolation and inference across different data domains.
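
    To make the generative replay mechanism concrete without a full VAEGAN, the sketch below substitutes a Gaussian mixture for the generator and an MLP for the task model; the two "domains", the labels, and all hyperparameters are invented. Only the replay structure (sample from the old generator, pseudo-label with the old model, train jointly with the new data) mirrors the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Two sequential toy "domains" standing in for the paper's databases.
X1 = rng.normal([0, 0], 0.5, (500, 2)); y1 = (X1[:, 0] > 0).astype(int)
X2 = rng.normal([4, 4], 0.5, (500, 2)); y2 = (X2[:, 1] > 4).astype(int)

# After domain 1 we keep only a generator and the old classifier, not
# the data. A Gaussian mixture stands in for the VAEGAN generator.
gen1 = GaussianMixture(n_components=2, random_state=0).fit(X1)
clf1 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                     random_state=0).fit(X1, y1)

Xr, _ = gen1.sample(500)          # replayed pseudo-samples of domain 1
yr = clf1.predict(Xr)             # pseudo-labels from the old model

# Domain 2 is learned on real new data plus the replayed data, which is
# what counteracts catastrophic forgetting.
clf2 = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                     random_state=0).fit(np.vstack([X2, Xr]),
                                         np.concatenate([y2, yr]))
print("domain-1 accuracy after domain 2:", clf2.score(X1, y1))
print("domain-2 accuracy:", clf2.score(X2, y2))
```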

    Evidential Bagging: Combining Heterogeneous Classifiers in the Belief Functions Framework

    In machine learning, ensemble learning methodologies are known to improve predictive accuracy and robustness. They consist in learning many classifiers whose outputs are finally combined according to different techniques. Bagging, or Bootstrap Aggregating, is one of the most famous ensemble methodologies and is usually applied with a single base classification algorithm, i.e. the same type of classifier is learnt multiple times on bootstrapped versions of the initial learning dataset. In this paper, we propose a bagging methodology that involves different types of classifiers. The classifiers' probabilistic outputs are used to build mass functions, which are then combined within the belief functions framework. Three different ways of building mass functions are proposed, and preliminary experiments on benchmark datasets show the relevance of the approach.
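
    The combination step can be illustrated directly. The sketch below uses one common (assumed, not necessarily one of the paper's three) way to turn a probability vector into a mass function, namely uniform discounting toward the whole frame Omega, and then applies Dempster's rule, which takes a simple closed form here because all focal sets are singletons or Omega.

```python
import numpy as np

def to_mass(p, beta=0.8):
    # Discount the classifier's probability vector: a fraction beta
    # stays on the singletons, 1 - beta goes to the whole frame Omega,
    # encoding the classifier's own unreliability.
    m = beta * np.asarray(p, float)
    return m, 1.0 - beta          # (singleton masses, mass on Omega)

def dempster(m1, o1, m2, o2):
    # Dempster's rule when focal sets are singletons and Omega only.
    k = m1.sum() * m2.sum() - (m1 * m2).sum()   # conflict mass
    m = m1 * m2 + m1 * o2 + m2 * o1             # agreeing singletons
    o = o1 * o2                                  # Omega with Omega
    z = 1.0 - k                                  # renormalization
    return m / z, o / z

# Hypothetical outputs of three heterogeneous classifiers on one sample.
outputs = [[0.7, 0.2, 0.1], [0.5, 0.4, 0.1], [0.6, 0.1, 0.3]]
m, o = to_mass(outputs[0])
for p in outputs[1:]:
    m2, o2 = to_mass(p)
    m, o = dempster(m, o, m2, o2)

# Decision via the pignistic transform: spread Omega's mass uniformly.
betp = m + o / len(m)
print("combined masses:", m.round(3), "Omega:", round(o, 3))
print("predicted class:", int(np.argmax(betp)))
```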

    How does a dictation machine recognize speech?

    There is magic (or is it witchcraft?) in a speech recognizer that transcribes continuous radio speech into text, even with a word accuracy of no more than 50%. The extreme difficulty of this task, though, is usually not perceived by the general public. This is because we are almost deaf to the infinite acoustic variations that accompany the production of vocal sounds, which arise from physiological constraints (co-articulation), but also from the acoustic environment (additive or convolutional noise, Lombard effect) or from the emotional state of the speaker (voice quality, speaking rate, hesitations, etc.). Our consciousness of speech is indeed not stimulated until after it has been processed by our brain to make it appear as a sequence of meaningful units: phonemes and words. In this chapter we will see how statistical pattern recognition and statistical sequence recognition techniques are currently used to try to mimic this extraordinary faculty of our mind (Section 4.1). We will follow, in Section 4.2, with a MATLAB-based proof of concept of word-based automatic speech recognition (ASR) based on Hidden Markov Models (HMM), using a bigram model to capture (syntactic-semantic) language constraints.
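
    A toy version of the decoding step described here, with invented numbers: states are whole words rather than phoneme-level HMMs, the per-frame acoustic likelihoods are made up, and the bigram matrix plays the role of the language model inside a standard Viterbi search.

```python
import numpy as np

words = ["yes", "no", "maybe"]
logB = np.log([[0.7, 0.2, 0.1],   # hypothetical P(frame | word),
               [0.1, 0.8, 0.1],   # one row per observed frame
               [0.2, 0.1, 0.7]])
logA = np.log([[0.1, 0.6, 0.3],   # hypothetical bigram P(w_t | w_{t-1})
               [0.5, 0.2, 0.3],
               [0.4, 0.4, 0.2]])
logpi = np.log([1 / 3] * 3)       # uniform prior on the first word

# Viterbi search: best word sequence maximizes
# prior * bigram transitions * acoustic likelihoods.
T, N = logB.shape
delta = logpi + logB[0]
psi = np.zeros((T, N), int)
for t in range(1, T):
    scores = delta[:, None] + logA       # every previous-word choice
    psi[t] = scores.argmax(axis=0)       # best predecessor per word
    delta = scores.max(axis=0) + logB[t]

# Backtrace from the best final word to recover the sequence.
path = [int(delta.argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
print([words[w] for w in reversed(path)])
```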

    Identifying hazardousness of sewer pipeline gas mixture using classification methods: a comparative study

    In this work, we formulated a real-world problem related to sewer pipeline gas detection using classification-based approaches. The primary goal was to identify the hazardousness of a sewer pipeline so as to offer safe and non-hazardous access to sewer pipeline workers, avoiding the human fatalities that occur due to exposure to toxic sewer gas components. A dataset acquired through laboratory tests, experiments, and various literature sources was organized to design a predictive model able to identify/classify hazardous and non-hazardous situations in a sewer pipeline. To design such a prediction model, several classification algorithms were used, and their performances were evaluated and compared, both empirically and statistically, over the collected dataset. In addition, the performance of several ensemble methods was analyzed to understand the extent of improvement offered by these methods. The results of this comprehensive study showed that the instance-based learning algorithm performed better than many other algorithms, such as the multilayer perceptron, radial basis function network, support vector machine, and reduced-error pruning tree. Similarly, it was observed that the multi-scheme ensemble approach enhanced the performance of the base predictors.
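
    A hedged sketch of the comparison protocol on a synthetic stand-in for the sewer-gas data (the real features and the paper's full classifier list, e.g. the RBF network, are not reproduced): cross-validated accuracy for an instance-based learner, an MLP, an SVM, and a soft-voting multi-scheme ensemble over the three.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the sewer-gas dataset: the features would be
# gas component concentrations, the label hazardous vs. non-hazardous.
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           random_state=0)

base = [
    ("knn", KNeighborsClassifier(n_neighbors=5)),     # instance-based
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]
models = dict(base)
# Multi-scheme ensemble: soft vote over the heterogeneous base models.
models["vote"] = VotingClassifier(base, voting="soft")

for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:5s} mean 5-fold CV accuracy = {acc:.3f}")
```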